
    Particle-kernel estimation of the filter density in state-space models

    Sequential Monte Carlo (SMC) methods, also known as particle filters, are simulation-based recursive algorithms for the approximation of the a posteriori probability measures generated by state-space dynamical models. At any given time t, an SMC method produces a set of samples over the state space of the system of interest (often termed "particles") that is used to build a discrete and random approximation of the posterior probability distribution of the state variables, conditional on a sequence of available observations. One potential application of the methodology is the estimation of the densities associated with the sequence of a posteriori distributions. While practitioners have rather freely applied such density approximations in the past, the issue has received less attention from a theoretical perspective. In this paper, we address the problem of constructing kernel-based estimates of the posterior probability density function and its derivatives, and obtain asymptotic convergence results for the estimation errors. In particular, we find convergence rates for the approximation errors that hold uniformly on the state space and guarantee that the error vanishes almost surely as the number of particles in the filter grows. Based on this uniform convergence result, we first show how to build continuous measures that converge almost surely (with known rate) toward the posterior measure and then address a few applications. The latter include maximum a posteriori estimation of the system state using the approximate derivatives of the posterior density and the approximation of functionals of it, for example, Shannon's entropy. Comment: this manuscript is identical to the published paper, including a gap in the proof of Theorem 4.2. The theorem itself is correct; an erratum with a complete proof and a brief discussion is provided at the end of the document. Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm) at http://dx.doi.org/10.3150/13-BEJ545
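    To make the construction concrete, the following is a minimal Python sketch of the kind of estimator the abstract describes: a bootstrap particle filter for an assumed scalar linear-Gaussian model, followed by a Gaussian-kernel estimate of the filtering density built from the particles. The model, the bandwidth rule and all parameter values are illustrative assumptions, not the paper's specific choices.

```python
# A minimal sketch (not the paper's exact estimator): a bootstrap particle
# filter for a scalar state-space model, followed by a Gaussian-kernel
# estimate of the filtering density built from the (resampled) particles.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter_step(particles, y, sigma_x=1.0, sigma_y=0.5):
    """One bootstrap-filter step: propagate, weight by the likelihood, resample."""
    # Propagate through an assumed transition x_t = 0.9 x_{t-1} + noise.
    particles = 0.9 * particles + sigma_x * rng.standard_normal(particles.shape)
    # Weight with an assumed Gaussian observation model y_t = x_t + noise.
    logw = -0.5 * ((y - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling yields an equally weighted particle set.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

def kernel_density(particles, grid, h=None):
    """Gaussian-kernel estimate of the filtering pdf on a grid of points."""
    if h is None:
        # Silverman-type bandwidth; the paper studies how h should scale with N.
        h = 1.06 * particles.std() * len(particles) ** (-1 / 5)
    z = (grid[:, None] - particles[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# Run a few steps on synthetic observations and evaluate the density estimate.
particles = rng.standard_normal(2000)
for y in [0.3, 0.1, -0.4, 0.2]:
    particles = bootstrap_filter_step(particles, y)
grid = np.linspace(-3, 3, 200)
pdf_hat = kernel_density(particles, grid)
print("MAP estimate from the density grid:", grid[np.argmax(pdf_hat)])
```

    The maximum of the density estimate over the grid is one way to use the approximate density for maximum a posteriori estimation, as mentioned in the abstract.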

    Two adaptive rejection sampling schemes for probability density functions with log-convex tails

    Monte Carlo methods are often necessary for the implementation of optimal Bayesian estimators. A fundamental technique that can be used to generate samples from virtually any target probability distribution is the so-called rejection sampling method, which generates candidate samples from a proposal distribution and then accepts them or not by testing the ratio of the target and proposal densities. The class of adaptive rejection sampling (ARS) algorithms is particularly interesting because they can achieve high acceptance rates. However, the standard ARS method can only be used with log-concave target densities. For this reason, many generalizations have been proposed. In this work, we investigate two different adaptive schemes that can be used to draw exactly from a large family of univariate probability density functions (pdf's), not necessarily log-concave, possibly multimodal and with tails of arbitrary concavity. These techniques are adaptive in the sense that every time a candidate sample is rejected, the acceptance rate is improved. The two proposed algorithms can work properly when the target pdf is multimodal, with first and second derivatives analytically intractable, and when the tails are log-convex in an infinite domain. Therefore, they can be applied in a number of scenarios in which the other generalizations of the standard ARS fail. Two illustrative numerical examples are shown.
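    As background for the accept/reject test the abstract refers to, here is a minimal Python sketch of plain (non-adaptive) rejection sampling for a bimodal target. The target mixture, the Gaussian proposal and the numerically computed bound M are illustrative assumptions, not the envelope constructions proposed in the paper.

```python
# Plain rejection sampling: draw a candidate from a proposal q, accept it
# with probability p(x) / (M q(x)), where p(x) <= M q(x) for all x.
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def target_pdf(x):
    """Bimodal target: an equal-weight mixture of two Gaussians."""
    return 0.5 * gauss_pdf(x, -1.0, 0.5) + 0.5 * gauss_pdf(x, 2.0, 0.8)

MU_Q, SD_Q = 0.5, 3.0                       # Gaussian proposal q(x)
proposal_pdf = lambda x: gauss_pdf(x, MU_Q, SD_Q)

# The bound M is found numerically on a grid with a small safety factor;
# the ratio p/q is bounded because the proposal has heavier tails than
# either mixture component.
grid = np.linspace(-10.0, 12.0, 4001)
M = 1.05 * np.max(target_pdf(grid) / proposal_pdf(grid))

def rejection_sample(n):
    samples, proposed = [], 0
    while len(samples) < n:
        x = rng.normal(MU_Q, SD_Q)
        proposed += 1
        if rng.uniform() < target_pdf(x) / (M * proposal_pdf(x)):
            samples.append(x)
    return np.array(samples), len(samples) / proposed

xs, acc = rejection_sample(5000)
print(f"acceptance rate {acc:.2f} (adaptive schemes refine the proposal "
      "after every rejection to raise this rate)")
```

    The acceptance rate of this fixed-proposal sampler is what the adaptive schemes in the paper improve: each rejected candidate is used to tighten the proposal around the target.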

    Nudging the particle filter

    We investigate a new sampling scheme aimed at improving the performance of particle filters whenever (a) there is a significant mismatch between the assumed model dynamics and the actual system, or (b) the posterior probability tends to concentrate in relatively small regions of the state space. The proposed scheme pushes some particles towards specific regions where the likelihood is expected to be high, an operation known as nudging in the geophysics literature. We reinterpret nudging in a form applicable to any particle filtering scheme, since it does not require changes to the rest of the algorithm. Because the particles are modified but the importance weights do not account for this modification, nudging introduces additional bias in the resulting estimators. However, we prove analytically that nudged particle filters can still attain asymptotic convergence with the same error rates as conventional particle methods. Simple analysis also yields an alternative interpretation of the nudging operation that explains its robustness to model errors. Finally, we show numerical results that illustrate the improvements that can be attained using the proposed scheme. In particular, we present nonlinear tracking examples with synthetic data and a model inference example using real-world financial data.
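    A minimal Python sketch of one nudged bootstrap-filter step follows, assuming a scalar model with a Gaussian observation y_t = x_t + noise. The choice of nudging operator (relaxing a random subset of particles toward the observation), the fraction of nudged particles and the step size are hypothetical illustrations, not the paper's specific operators.

```python
# A nudged bootstrap-filter step: propagate, nudge a fraction of the
# particles toward the region of high likelihood, then weight and resample
# WITHOUT correcting the weights for the nudge, as the abstract describes.
import numpy as np

rng = np.random.default_rng(2)

def nudged_pf_step(particles, y, sigma_x=1.0, sigma_y=0.5,
                   nudge_frac=0.1, step=0.5):
    # Propagate through an assumed transition x_t = 0.9 x_{t-1} + noise.
    particles = 0.9 * particles + sigma_x * rng.standard_normal(particles.shape)
    # Nudge a random subset toward the observation (one possible nudging operator).
    n_nudge = int(nudge_frac * len(particles))
    idx = rng.choice(len(particles), size=n_nudge, replace=False)
    particles[idx] += step * (y - particles[idx])
    # Standard importance weights; the nudge is deliberately not accounted for.
    logw = -0.5 * ((y - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x_hat = np.sum(w * particles)            # posterior-mean estimate
    # Multinomial resampling.
    keep = rng.choice(len(particles), size=len(particles), p=w)
    return particles[keep], x_hat

particles = rng.standard_normal(1000)
for y in [0.5, 1.0, 0.8, 1.2]:
    particles, x_hat = nudged_pf_step(particles, y)
print("posterior-mean estimate after nudged steps:", round(x_hat, 3))
```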

    Adapting the Number of Particles in Sequential Monte Carlo Methods through an Online Scheme for Convergence Assessment

    Particle filters are broadly used to approximate posterior distributions of hidden states in state-space models by means of sets of weighted particles. While the convergence of the filter is guaranteed when the number of particles tends to infinity, the quality of the approximation is usually unknown but strongly dependent on the number of particles. In this paper, we propose a novel method for assessing the convergence of particle filters in an online manner, as well as a simple scheme for the online adaptation of the number of particles based on the convergence assessment. The method is based on a sequential comparison between the actual observations and their predictive probability distributions approximated by the filter. We provide a rigorous theoretical analysis of the proposed methodology and, as an example of its practical use, we present simulations of a simple algorithm for the dynamic and online adaptation of the number of particles during the operation of a particle filter on a stochastic version of the Lorenz system.
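    The following Python sketch illustrates the idea of comparing actual observations with their particle-approximated predictive distributions and adapting the number of particles accordingly. The rank statistic, the window length and the doubling/halving rule are simplified illustrative assumptions, not the rule analysed in the paper.

```python
# At each step, fictitious observations drawn from the particle approximation
# of the predictive distribution are compared with the actual observation;
# a rank statistic over a sliding window drives the adaptation of N.
import numpy as np

rng = np.random.default_rng(3)
sigma_x, sigma_y = 1.0, 0.5

def step_and_assess(particles, y, n_fict=7):
    # Propagate (assumed model x_t = 0.9 x_{t-1} + noise).
    particles = 0.9 * particles + sigma_x * rng.standard_normal(particles.shape)
    # Draw fictitious observations from the approximate predictive distribution.
    idx = rng.choice(len(particles), size=n_fict)
    y_fict = particles[idx] + sigma_y * rng.standard_normal(n_fict)
    rank = int(np.sum(y_fict < y))       # approximately uniform on {0,...,n_fict}
    # Weight and resample as in a bootstrap filter.
    logw = -0.5 * ((y - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(len(particles), size=len(particles), p=w)]
    return particles, rank

N = 200
particles = rng.standard_normal(N)
ranks, x_true = [], 0.0
for t in range(200):
    x_true = 0.9 * x_true + sigma_x * rng.standard_normal()
    y = x_true + sigma_y * rng.standard_normal()     # synthetic observation
    particles, rank = step_and_assess(particles, y)
    ranks.append(rank)
    if len(ranks) == 20:                             # assess over a window
        counts = np.bincount(ranks, minlength=8)
        # Crude rule: if the rank histogram is far from uniform, double N;
        # if it looks uniform, halve N to save computation.
        if counts.max() > 2 * counts.mean():
            N = 2 * N
        elif N > 50:
            N = N // 2
        particles = rng.choice(particles, size=N)    # resize the particle set
        ranks = []
print("final number of particles:", N)
```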

    A generalization of the adaptive rejection sampling algorithm

    The original publication is available at www.springerlink.com. Rejection sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. The adaptive rejection sampling method is an efficient algorithm to sample from a log-concave target density, which attains high acceptance rates by improving the proposal density whenever a sample is rejected. In this paper we introduce a generalized adaptive rejection sampling procedure that can be applied with a broad class of target probability distributions, possibly non-log-concave and exhibiting multiple modes. The proposed technique yields a sequence of proposal densities that converge toward the target pdf, thus achieving very high acceptance rates. We provide a simple numerical example to illustrate the basic use of the proposed technique, together with a more elaborate positioning application using real data. This work has been partially supported by the Ministry of Science and Innovation of Spain (project MONIN, ref. TEC-2006-13514-C02-01/TCM; project DEIPRO, ref. TEC-2009-14504-C02-01; and program Consolider-Ingenio 2010 CSD2008-00010 COMONSENS) and the Autonomous Community of Madrid (project PROMULTIDIS-CM, ref. S-0505/TIC/0233).
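    For reference, here is a minimal Python sketch of the standard (log-concave) adaptive rejection sampler that this paper generalizes, shown for a standard normal target: the envelope is a piecewise-exponential upper hull built from tangents of log p, and every rejected point is added as a new tangent point, which is the "improve the proposal after each rejection" mechanism mentioned in the abstract. The squeeze test and the paper's non-log-concave extension are omitted; the target, the initial abscissae and the rebuild-on-every-iteration loop are simplifications.

```python
# Standard ARS sketch for a log-concave target (standard normal).
import numpy as np

rng = np.random.default_rng(4)

log_p = lambda x: -0.5 * x ** 2          # log target (up to a constant)
dlog_p = lambda x: -x                    # its derivative

def build_hull(xs):
    """Tangent intersections and per-segment masses of the upper hull."""
    xs = np.sort(xs)
    h, b = log_p(xs), dlog_p(xs)
    # Intersections of consecutive tangent lines.
    z = (h[1:] - h[:-1] + b[:-1] * xs[:-1] - b[1:] * xs[1:]) / (b[:-1] - b[1:])
    edges = np.concatenate(([-np.inf], z, [np.inf]))
    lo, hi = edges[:-1], edges[1:]
    # Mass of segment i: integral of exp(h_i + b_i (x - x_i)) over (lo_i, hi_i).
    mass = np.exp(h - b * xs) * (np.exp(b * hi) - np.exp(b * lo)) / b
    return xs, h, b, edges, mass

def sample_hull(xs, h, b, edges, mass):
    """Draw one sample from the normalized piecewise-exponential envelope."""
    i = rng.choice(len(xs), p=mass / mass.sum())
    u = rng.uniform()
    lo, hi = edges[i], edges[i + 1]
    # Inverse-CDF sampling within the chosen exponential segment.
    x = np.log((1 - u) * np.exp(b[i] * lo) + u * np.exp(b[i] * hi)) / b[i]
    upper = h[i] + b[i] * (x - xs[i])    # hull value at x
    return x, upper

# Initial abscissae must give tangent slopes of both signs (a zero slope
# would need a special case, omitted here).
xs = np.array([-2.0, 1.0])
samples, rejections = [], 0
while len(samples) < 2000:
    hull = build_hull(xs)                # rebuilt each iteration for simplicity
    x, upper = sample_hull(*hull)
    if np.log(rng.uniform()) < log_p(x) - upper:
        samples.append(x)
    else:
        rejections += 1
        xs = np.append(xs, x)            # refine the envelope at the rejected point
print(f"{rejections} rejections; sample mean {np.mean(samples):.3f}, "
      f"sample std {np.std(samples):.3f}")
```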

    Nested particle filters for online parameter estimation in discrete-time state-space Markov models

    Document deposited in the arXiv.org repository. Version: arXiv:1308.1883v5 [stat.CO]. We address the problem of approximating the posterior probability distribution of the fixed parameters of a state-space dynamical system using a sequential Monte Carlo method. The proposed approach relies on a nested structure that employs two layers of particle filters to approximate the posterior probability measure of the static parameters and the dynamic state variables of the system of interest, in a vein similar to the recent "sequential Monte Carlo square" (SMC2) algorithm. However, unlike the SMC2 scheme, the proposed technique operates in a purely recursive manner. In particular, the computational complexity of the recursive steps of the method introduced herein is constant over time. We analyse the approximation of integrals of real bounded functions with respect to the posterior distribution of the system parameters computed via the proposed scheme. As a result, we prove, under regularity assumptions, that the approximation errors vanish asymptotically in Lp (p ≥ 1) with convergence rate proportional to 1/√N + 1/√M, where N is the number of Monte Carlo samples in the parameter space and N×M is the number of samples in the state space. This result also holds for the approximation of the joint posterior distribution of the parameters and the state variables. We discuss the relationship between the SMC2 algorithm and the new recursive method and present a simple example in order to illustrate some of the theoretical findings with computer simulations. The work of D. Crisan has been partially supported by EPSRC grant no. EP/N023781/1. The work of J. Míguez was partially supported by the Office of Naval Research Global (award no. N62909-15-1-2011), Ministerio de Economía y Competitividad of Spain (project TEC2015-69868-C2-1-R ADVENTURE) and Ministerio de Educación, Cultura y Deporte of Spain (Programa Nacional de Movilidad de Recursos Humanos PRX12/00690). Part of this work was carried out while J. M. was a visitor at the Department of Mathematics of Imperial College London, with partial support from an EPSRC Mathematics Platform grant. D. C. and J. M. would also like to acknowledge the support of the Isaac Newton Institute through the program “Monte Carlo Inference for High-Dimensional Statistical Models”, as well as the constructive comments of an anonymous Reviewer, who helped improve the final version of this manuscript.
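    A minimal Python sketch of the nested structure described in the abstract follows: an outer set of N parameter particles, each carrying an inner bootstrap filter with M state particles; at every time step each inner filter supplies an estimate of the likelihood of the new observation given its parameter, which is then used to weight and resample the parameter particles. The model (x_t = a x_{t-1} + noise, y_t = x_t + noise, with unknown a), the jittering kernel and all numerical values are illustrative assumptions, not the paper's exact construction.

```python
# Nested particle filter sketch: outer layer over the parameter a,
# inner bootstrap filters over the state x.
import numpy as np

rng = np.random.default_rng(5)
sigma_x, sigma_y = 0.8, 0.5
N, M = 50, 100                                   # parameter / state particles

a_particles = rng.uniform(0.0, 1.0, size=N)      # outer layer: parameter a
x_particles = rng.standard_normal((N, M))        # inner layer: states per parameter

# Synthetic data generated with a "true" parameter value.
a_true, T = 0.7, 100
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + sigma_x * rng.standard_normal()
    ys.append(x + sigma_y * rng.standard_normal())

for y in ys:
    # Jitter the parameter particles slightly (keeps the outer layer diverse).
    a_particles = np.clip(a_particles + 0.01 * rng.standard_normal(N), -1, 1)
    # Inner bootstrap filters: propagate states and compute log-weights.
    x_particles = a_particles[:, None] * x_particles \
        + sigma_x * rng.standard_normal((N, M))
    logw = -0.5 * ((y - x_particles) / sigma_y) ** 2 \
        - 0.5 * np.log(2 * np.pi * sigma_y ** 2)
    lik = np.exp(logw).mean(axis=1)              # p(y | a) estimate per parameter
    # Resample states within each inner filter.
    for i in range(N):
        w = np.exp(logw[i] - logw[i].max()); w /= w.sum()
        x_particles[i] = x_particles[i, rng.choice(M, size=M, p=w)]
    # Weight and resample the parameter particles with the likelihood estimates.
    wa = lik / lik.sum()
    keep = rng.choice(N, size=N, p=wa)
    a_particles, x_particles = a_particles[keep], x_particles[keep]

print("posterior-mean estimate of a:", round(a_particles.mean(), 3))
```

    The jittering step is one simple way to keep the purely recursive structure the abstract emphasizes; the paper analyses how such perturbations and the choices of N and M affect the Lp approximation errors.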
